

Section: New Results

Research axis 3: Management of Information in Neuroimaging

Large-scale data analyses

Objective Evaluation of Multiple Sclerosis Lesion Segmentation using a Data Management and Processing Infrastructure

Participants : O. Commowick, M. Kain, F. Leray, M. Simon, J.-C. Ferré, A. Kerbrat, G. Edan, C. Barillot.

In collaboration with OFSEP and France Life Imaging, we conducted a study of multiple sclerosis (MS) lesion segmentation algorithms as part of the international MICCAI 2016 challenge. The challenge was operated on France Life Imaging (FLI-IAM), a new open-science computing infrastructure, which allowed a large range of algorithms to be evaluated independently, fairly, and fully automatically. Thirteen MS lesion segmentation methods, spanning a broad range of state-of-the-art approaches, were evaluated against a high-quality database of 53 MS cases acquired at four centers following a common acquisition protocol. Each case was annotated manually by an unprecedented number of seven different experts. The results highlighted that automatic algorithms, including recent machine learning methods (e.g., random forests, deep learning), still trail human expertise on both detection and delineation criteria. In addition, we showed that a statistically robust consensus of the algorithms performs closer to human expertise on the segmentation score, although it still trails on detection scores [7].

This work was done in collaboration with A. Istace, B. Laurent, S. C. Pop, P. Girard, R. Ameli, T. Tourdias, F. Cervenansky, T. Glatard, J. Beaumont, S. Doyle, F. Forbes, J. Knight, A. Khademi, A. Mahbod, C. Wang, R. Mckinley, F. Wagner, J. Muschelli, E. Sweeney, E. Roura, X. Lladó, M. M. Santos, W. P. Santos, A. G. Silva-Filho, X. Tomas-Fernandez, H. Urien, I. Bloch, S. Valverde, M. Cabezas, F. J. Vera-Olmos, N. Malpica, C. R. G. Guttmann, S. Vukusic, M. Dojat, M. Styner, S. K. Warfield and F. Cotton.

Same Data - Different Software - Different Results? Analytic Variability of Group fMRI Results

Participant : Camille Maumet.

A wealth of analysis tools is available to fMRI researchers to extract patterns of task-related variation and, ultimately, understand cognitive function. However, this 'methodological plurality' comes with a drawback: while conceptually similar, two different analysis pipelines applied to the same dataset may not produce the same scientific results. Differences in methods, implementations across software packages, and even operating systems or software versions all contribute to this variability. Consequently, attention in the field has recently turned to reproducibility and data sharing. Neuroimaging is currently experiencing a surge of initiatives to improve research practices and ensure that all conclusions inferred from an fMRI study are replicable. In this work, our goal was to understand how the choice of software package impacts analysis results. We used publicly shared data from three published task fMRI neuroimaging studies, reanalyzing each study with the three main neuroimaging software packages, AFNI, FSL and SPM, using both parametric and nonparametric inference. All information on how to process, analyze, and model each dataset was obtained from the publications. We then made quantitative and qualitative comparisons between our replications to gauge the scale of variability in our results and assess the fundamental differences between the packages. While we found broad qualitative similarities between packages, we also discovered marked differences, such as Dice similarity coefficients ranging from 0.000 to 0.743 in comparisons of thresholded statistic maps between software packages [28], [48].

This work was done in collaboration with Alexander Bowring and Prof. Thomas Nichols from the Oxford Big Data Institute in the UK.
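The Dice comparison of thresholded statistic maps can be illustrated as follows. The statistic values and the threshold below are toy numbers, not data from the study (z > 3.1 is merely a common cluster-forming convention):

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient between two binary masks."""
    inter = np.logical_and(a, b).sum()
    denom = a.sum() + b.sum()
    return 2.0 * inter / denom if denom else 1.0

z = 3.1  # assumed cluster-forming threshold, a common convention
# Toy statistic maps standing in for the same analysis run in two packages
map_a = np.array([0.5, 3.5, 4.0, 1.0])  # hypothetical "package A" map
map_b = np.array([0.4, 3.6, 2.9, 1.1])  # hypothetical "package B" map
overlap = dice(map_a > z, map_b > z)
print(round(overlap, 3))  # 0.667
```

Even the small analytic difference in the toy maps (one supra-threshold voxel dropping below z in the second map) pulls the overlap well below 1, which is the effect the study quantified at scale.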

Detecting and Interpreting Heterogeneity and Publication Bias in Image-Based Meta-Analyses

Participant : Camille Maumet.

With the increase in data sharing, meta-analyses are becoming increasingly important in the neuroimaging community. They provide a quantitative summary of published results and heightened confidence due to higher statistical power. The gold-standard approach to combining results from neuroimaging studies is an Image-Based Meta-Analysis (IBMA) [1], in which group-level maps from different studies are combined. Recently, we introduced the IBMA toolbox, an extension for SPM that provides methods for combining image maps from multiple studies [2]. However, the current toolbox lacks the diagnostic tools needed to assess critical assumptions of meta-analysis, in particular whether there is inter-study variation requiring random-effects IBMA, and whether publication bias is present. We have proposed two new tools, added to the IBMA toolbox, to detect heterogeneity and to assess evidence of publication bias [40].

This work was done in collaboration with Thomas Maullin-Saper and Prof. Thomas Nichols from the Oxford Big Data Institute in the UK.

Infrastructures

Open Science for the Neuroinformatics community

Participants : Camille Maumet, Xavier Rolland, Michael Kain, Christian Barillot.

The Neuroinformatics community in OpenAire-Connect is represented by members of the France Life Imaging (FLI) collaboration. In this context, we aim to leverage OpenAire-Connect services and give our community members the possibility to easily publish and exchange research artefacts from FLI platforms, such as VIP and Shanoir. This will enable open and reproducible science, since literature, data, and methods can be linked, retrieved, and replayed by all members of the community [30].

This work was done in collaboration with Sorina Pop, Axel Bonnet and Tristan Glatard.

Standardisation and interoperability

Interoperability with Boutiques and CARMIN

Participants : Camille Maumet, Michael Kain, Christian Barillot.

A growing number of platforms and tools have lately been developed to meet the needs of various scientific communities. Most of these solutions are optimized for the specific requirements of different user groups, leading to technological fragmentation and a lack of interoperability. In our quest for open and reproducible science, we proposed two complementary tools, Boutiques and CARMIN, that provide cross-platform interoperability for scientific applications, data sharing and processing [29].

This work was done in collaboration with Sorina Pop, Axel Bonnet and Tristan Glatard.
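Boutiques describes command-line tools with JSON descriptors that any compliant platform can execute. The sketch below builds a minimal, Boutiques-style descriptor in Python; the tool name, command line, and input are hypothetical, and the authoritative field set is defined by the Boutiques schema rather than this example.

```python
import json

# Illustrative, minimal Boutiques-style descriptor. The tool, command
# line, and input are made up; consult the Boutiques schema for the
# authoritative list of required fields and their semantics.
descriptor = {
    "name": "example-segmenter",           # hypothetical tool name
    "tool-version": "1.0.0",
    "description": "Illustrative lesion segmentation tool.",
    "schema-version": "0.5",
    "command-line": "segment [INPUT_IMAGE] [OUTPUT_MASK]",
    "inputs": [
        {"id": "input_image", "name": "Input image",
         "type": "File", "value-key": "[INPUT_IMAGE]"},
    ],
    "output-files": [
        {"id": "output_mask", "name": "Output mask",
         "path-template": "[OUTPUT_MASK]"},
    ],
}

serialized = json.dumps(descriptor, indent=2)
print(serialized)
```

The value-keys in the command line are substituted with concrete file paths at execution time, which is what lets a platform such as VIP run the tool without tool-specific code.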

A standardised representation for non-parametric fMRI results

Participant : Camille Maumet.

Reuse of data collected and analysed at another site is becoming more prevalent in the neuroimaging community, but this process usually relies on intensive data and metadata curation. Given the ever-increasing number of research datasets produced and shared, it is desirable to rely on standards that enable automatic data and metadata retrieval for large-scale analyses. We recently introduced NIDM-Results, a data model to represent and publish data and metadata created as part of a mass univariate neuroimaging study (typically functional magnetic resonance imaging). In this work, we have proposed to extend this model to allow for the representation of non-parametric analyses, and we introduced a JSON API that facilitates export into NIDM-Results [25].

This work was done as part of an international collaboration with Guillaume Flandin, Martin Perez-Guevara, Jean-Baptiste Poline, Justin Rajendra, Richard Reynolds, Betrand Thirion, Thomas Maullin-Sapey and Thomas Nichols.
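To give a flavour of what a machine-readable export of analysis metadata looks like, the sketch below serializes a few group-level analysis attributes to JSON. The field names are illustrative placeholders, not the actual NIDM-Results vocabulary, which is defined by the NIDM specification; the file name and parameter values are likewise made up.

```python
import json

# Hypothetical, simplified metadata record for a group fMRI analysis.
# Field names are placeholders, not NIDM-Results terms.
analysis = {
    "software": "SPM",                 # assumed analysis package
    "inference": "nonparametric",      # the extension covers this case
    "n_permutations": 5000,            # only meaningful when nonparametric
    "height_threshold": {"type": "p-value (FWE)", "value": 0.05},
    "statistic_map": "spmT_0001.nii.gz",  # hypothetical file name
}

record = json.dumps(analysis, sort_keys=True)
restored = json.loads(record)
print(restored["inference"])  # nonparametric
```

The point of a standardised model is precisely that such records can be produced and parsed identically across packages, enabling the automatic, large-scale retrieval described above.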

Development of an Ontology for the INCF Neuroimaging Data Model (NIDM)

Participant : C. Maumet.

The successful reuse of shared data relies on the existence of easily available, well-described metadata. The metadata, as a rich description of the data, must capture information on how the data were acquired, processed and analyzed. The terms used to describe the data should be chosen within a logical, consistent framework and include definitions to avoid ambiguity. In addition, a lexicon or ontology should reuse terms from existing efforts as much as possible [38].

This work was done as part of an international collaboration with K.G. Helmer, K.B. Keator, T. Auer, S. Ghosh, T.E. Nichols, P. Smruti and J.B. Poline.